77 research outputs found

    A component-oriented programming framework for developing embedded mobile robot software using PECOS model

    A practical framework for component-based software engineering of embedded real-time systems, particularly for autonomous mobile robot software development using the PECOS component model, is proposed. The main features of this framework are: (1) a graphical representation for component definition and composition; (2) C as the target language, for optimal code generation on small micro-controllers; and (3) no run-time support required beyond a real-time kernel. A real-time implementation indicates that the PECOS component model, together with the proposed framework, is suitable for resource-constrained embedded systems

    A weakly hard scheduling approach of partitioned scheduling on multiprocessor systems

    Real-time systems or tasks can be classified into three categories based on the “seriousness” of deadline misses: hard, soft and weakly hard real-time tasks. A deadline miss of a hard real-time task can be prohibitively expensive, because every task must meet its deadline, whereas soft real-time tasks tolerate “some” deadline misses. In a weakly hard real-time task, meanwhile, the distribution of met and missed deadlines is stated and specified precisely. As real-time applications are increasingly implemented on multiprocessor platforms, this study applies a multiprocessor scheduling approach to verify weakly hard real-time tasks and to guarantee their timing requirements. On a multiprocessor, the task allocation problem is even harder than in the uniprocessor case; to address it, a sufficient and efficient scheduling algorithm, supported by an accurate schedulability analysis technique, is presented to provide weakly hard real-time guarantees. The proposed schedulability analysis combines a partitioned multiprocessor scheduling technique with a solution for the bin-packing problem, called R-BOUND-MP-NFRNS (R-BOUND-MP with next-fit-ring no scaling), together with an exact analysis (hyperperiod analysis) and deadline models: weakly hard constraints and the µ-pattern under static priority scheduling. Matlab simulation is then used to validate the analysis results. The evaluation shows that the proposed approach outperforms existing approaches in terms of satisfying task deadlines
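    The partitioned-scheduling idea in the abstract above can be sketched as a next-fit bin-packing pass over task utilizations, opening a new processor whenever the rate-monotonic utilization bound would be exceeded. This is an illustrative simplification, not the paper's R-BOUND-MP-NFRNS algorithm (which uses a ring-ordered next-fit without period scaling); the task set and processor count below are made up.

```python
def rm_bound(n):
    """Liu & Layland rate-monotonic utilization bound for n tasks."""
    return n * (2 ** (1.0 / n) - 1)

def next_fit_partition(utilizations, num_procs):
    """Assign tasks to processors in order; move to a fresh processor
    (never revisiting earlier ones) when the current one would exceed
    the RM schedulability bound."""
    procs = [[] for _ in range(num_procs)]
    current = 0
    for u in utilizations:
        if sum(procs[current]) + u > rm_bound(len(procs[current]) + 1):
            current += 1  # next-fit: earlier processors are closed
            if current >= num_procs:
                raise ValueError("task set not schedulable with next-fit")
        procs[current].append(u)
    return procs

# Example: four tasks on three processors
partition = next_fit_partition([0.4, 0.3, 0.5, 0.2], 3)
```

A real weakly hard analysis would then check each processor's task set against the µ-pattern constraints over the hyperperiod, rather than stopping at the utilization bound.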

    A reliability estimation model using integrated tasks and resources

    With the growing size of modern systems, the composition of resources within a system is becoming increasingly complex. A reliability analysis of such systems is therefore essential, especially at design time. Reliability estimation models are rapidly becoming a crucial part of the system development life cycle, and a new model is needed to enable early reliability estimation for the system under study. However, existing approaches neglect the correlation between resources and system tasks when estimating system reliability, which restricts the accuracy of the estimates and can misguide the reliability analysis in general. This paper proposes a reliability estimation model that computes system reliability as a product of resource and system task reliabilities, where the system task reliability is treated as the transition probability that a resource executes subsequent resources. To validate the model, a real case study is used and the accuracy of the estimates is compared against actual reliability values. The results show that the estimation accuracy is at an acceptable level, with some scenarios recording higher accuracy than previous models. Compared with an existing model, our model provides a more accurate estimation for more complex scenarios

    Binary black hole-based optimization for T-way testing

    Software testing is an important process in the software development life cycle that aims to guarantee the quality of software and reduce the number of errors and bugs. In this process, software inputs and parameters are used to create a set of test cases. However, the number of test cases grows enormously when all combinations of those inputs are considered. Although t-way testing can reduce the number of test cases, generating a minimal yet representative t-way test set is challenging because of the large search space, which makes finding the best solution computationally prohibitive. Existing solutions suffer from sensitivity to random initialization and susceptibility to local minima, which adversely affects their reproducibility and obstructs finding the optimal solution. To this end, this paper proposes a novel meta-heuristic search algorithm called Binary Black Hole (BBH) optimization that formulates t-way testing as a binary optimization problem. Experimental results show the superiority of BBH over the well-known Binary Particle Swarm Optimization (BPSO) algorithm: BBH generates smaller covering arrays at the same t-strength than those generated by BPSO
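    A minimal sketch of the black-hole metaphor on a binary search space is given below. The best candidate ("black hole") attracts the other candidates ("stars") bit by bit, and a star that collapses onto the black hole is respawned at random. The OneMax fitness, population size and iteration count are illustrative assumptions, not the paper's covering-array objective or tuned parameters.

```python
import random

def one_max(bits):
    """Toy fitness (an assumption for illustration): count of 1-bits."""
    return sum(bits)

def binary_black_hole(fitness, n_bits, n_stars=10, iters=50, seed=1):
    """Minimal Binary Black Hole (BBH) sketch: the best star is the black
    hole; every other star copies each of the black hole's bits with
    probability 0.5, and a star absorbed by the black hole (identical
    bit string, i.e. inside the event horizon) is replaced by a fresh
    random star to keep diversity."""
    rng = random.Random(seed)
    stars = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_stars)]
    for _ in range(iters):
        scores = [fitness(s) for s in stars]
        bh_i = scores.index(max(scores))
        bh = stars[bh_i]
        for i in range(n_stars):
            if i == bh_i:
                continue  # the black hole itself does not move
            stars[i] = [t if rng.random() < 0.5 else b
                        for b, t in zip(stars[i], bh)]
            if stars[i] == bh:  # crossed the event horizon: respawn
                stars[i] = [rng.randint(0, 1) for _ in range(n_bits)]
    return max(stars, key=fitness)

best = binary_black_hole(one_max, 8)
```

For t-way testing, the bit string would instead encode a candidate test row and the fitness would count newly covered t-way interaction tuples.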

    Test case generation technique for concurrency in activity diagram

    Presently, Model-Based Testing (MBT) with the Unified Modelling Language (UML) has attracted the attention of many practitioners, who use UML diagrams to generate test cases. With this technique, faults can be detected early, at the design phase. However, some UML diagrams have limitations in generating test cases, such as requiring a loop combined fragment to describe looping and iteration, and a combined fragment with the par operator to express concurrent activities. To examine these issues, a feature analysis was conducted to compare the test cases generated for similar cases using different techniques and UML diagrams. Based on the results, a guideline was developed for selecting UML diagrams for test case generation according to the features of the software system. The design of concurrent software is complex, however, which leads to system testing issues such as synchronization, non-determinism, path explosion and deadlock. In this research, an enhancement of the generate-activity-paths algorithm was developed as a test case generation technique to solve the non-determinism problem of concurrent systems. Because the test cases are generated in random order, a prioritization technique based on a genetic algorithm was applied to find, among the produced test paths, the critical path that must be tested first. The technique was implemented on the Conference Management System case study and evaluated using cyclomatic complexity, branch coverage, mutation analysis and the Average Percentage of Faults Detected (APFD) to measure the effectiveness and quality of the test cases against the original technique. Results indicated that the technique achieved 100% basis path and branch coverage, like the original technique. Moreover, it can also reveal non-deterministic faults by injecting concurrency coverage criteria into the test paths, which was not possible with the original technique. Additionally, prioritizing the test paths yielded an APFD value of 43%, higher than the non-prioritized test paths (22%). This signifies that the prioritization technique improves the detection rate of severe faults compared to random ordering
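    The APFD metric quoted above (43% vs 22%) follows the standard formula APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where TFi is the 1-based position of the first test revealing fault i, n is the number of tests and m the number of faults. A small sketch, with hypothetical test and fault names, and assuming every fault is revealed by at least one test in the order:

```python
def apfd(order, faults):
    """Average Percentage of Faults Detected for a test ordering.
    `order` is the executed test sequence; `faults` maps each fault to
    the set of tests that reveal it."""
    n, m = len(order), len(faults)
    first_positions = []
    for revealing in faults.values():
        # 1-based position of the first test in the order that finds the fault
        first_positions.append(
            next(i + 1 for i, t in enumerate(order) if t in revealing))
    return 1 - sum(first_positions) / (n * m) + 1 / (2 * n)

# Hypothetical ordering: fault f1 first found by t2, f2 by t4
value = apfd(["t1", "t2", "t3", "t4"], {"f1": {"t2"}, "f2": {"t4"}})
```

Reordering so that t2 and t4 run first would raise the APFD value, which is exactly the effect the genetic-algorithm prioritization aims for.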

    Evaluation of Software Product Line Test Case Prioritization Technique

    The software product line (SPL) engineering paradigm is commonly used to handle the commonalities and variabilities of business applications so as to satisfy the specific needs or goals of a particular market. However, due to time and space complexity, testing all products is not feasible, and SPL testing has proven difficult because of the combinatorial explosion in the number of products to consider. Combinatorial interaction testing (CIT) is suggested to reduce the size of test suites to overcome budget limitations and deadlines, and is conducted to fulfil certain quality attributes. The method can be further improved by prioritizing the list of configurations generated from CIT, to obtain better results in terms of efficiency and scalability. However, to the best of our knowledge, little research has been done to evaluate existing Test Case Prioritization (TCP) techniques in SPL. This paper surveys existing work on test case prioritization techniques, classifying and comparing the best techniques, trends, gaps and proposed frameworks based on the literature. The evaluation and discussion use the Normative Information Model-based Systems Analysis and Design (NIMSAD) framework, covering context, content and validation. The discussion highlights the lack of techniques addressing the scalability issue in SPL, with most of the work set in academia rather than industrial practice

    A Comparison on Similarity Distances and Prioritization Techniques for Early Fault Detection Rate

    Nowadays, the Software Product Line (SPL) approach has replaced conventional product development. Much research has been carried out to ensure that SPL usage delivers its benefits for recent technologies. However, some problems remain within the concept itself, such as variability and commonality: because of variability, exhaustive testing is not possible. Various solutions have been proposed to lessen this problem. One of them is prioritization, which reorders the test cases to achieve a specific performance goal. In this paper, early fault detection is selected as the performance goal, and a similarity function is used within our prioritization approach. Five different prioritization techniques are compared in the experiment. The results indicate that the greed-aided-clustering ordered sequence (GOS) achieves the highest rate of early fault detection
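    Similarity-based prioritization of product configurations typically uses a set distance such as Jaccard and greedily picks the configuration most dissimilar from everything already selected, so early tests spread across the feature space. The sketch below illustrates that generic idea with made-up feature sets; it is not the paper's GOS clustering technique.

```python
def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| on the feature sets of two configurations
    (assumes at least one configuration is non-empty)."""
    a, b = set(a), set(b)
    return 1 - len(a & b) / len(a | b)

def similarity_prioritize(configs):
    """Greedy ordering: start from the first configuration, then repeatedly
    pick the one whose minimum distance to the already-selected set is
    largest (i.e. the most dissimilar remaining configuration)."""
    remaining = list(configs)
    ordered = [remaining.pop(0)]
    while remaining:
        far = max(remaining,
                  key=lambda c: min(jaccard_distance(c, s) for s in ordered))
        remaining.remove(far)
        ordered.append(far)
    return ordered

# Hypothetical products described by their selected features
order = similarity_prioritize([{"a", "b"}, {"a", "b", "c"}, {"x", "y"}])
```

Note how the completely disjoint product {"x", "y"} is promoted ahead of the near-duplicate {"a", "b", "c"}, which is the behaviour that drives early fault detection.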

    A comparative evaluation of state-of-the-art cloud migration optimization approaches

    Cloud computing has become increasingly attractive to consumers who wish to migrate their applications to the cloud environment. However, given the scale of cloud environments, application customers and providers face the problem of how to assess and choose appropriate service providers for migrating their applications to the cloud. Many approaches have investigated this problem. In this paper we classify these approaches into non-evolutionary and evolutionary cloud migration optimization approaches, and compare them against criteria including cost, QoS, elasticity and degree of migration optimization. Analysis of the comparative evaluation shows that a multi-objective optimization approach provides a better solution to support the decision to migrate an application to the cloud based on the proposed criteria. The classification of the investigated approaches will help practitioners and researchers deliver and build solid approaches
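    The core of a multi-objective comparison over criteria such as cost, QoS and elasticity is Pareto dominance: a provider is kept only if no other provider is at least as good on every criterion and strictly better on one. A minimal sketch with hypothetical provider names and numbers:

```python
def dominates(a, b):
    """Provider a dominates b if it is no worse on every criterion
    (lower cost, higher QoS, higher elasticity) and strictly better on
    at least one. Criteria tuples are (cost, qos, elasticity)."""
    cost_a, qos_a, el_a = a
    cost_b, qos_b, el_b = b
    no_worse = cost_a <= cost_b and qos_a >= qos_b and el_a >= el_b
    strictly = cost_a < cost_b or qos_a > qos_b or el_a > el_b
    return no_worse and strictly

def pareto_front(providers):
    """Keep only non-dominated providers: the candidate set a
    multi-objective migration optimizer would hand to the decision maker."""
    return {name: crit for name, crit in providers.items()
            if not any(dominates(other_crit, crit)
                       for other_name, other_crit in providers.items()
                       if other_name != name)}

# Hypothetical providers: (monthly cost, QoS score, elasticity score)
front = pareto_front({"A": (10, 0.9, 0.8),
                      "B": (12, 0.9, 0.8),
                      "C": (8, 0.7, 0.9)})
```

Provider B is dropped because A matches its QoS and elasticity at lower cost, while A and C survive as genuinely different trade-offs, which is why a multi-objective approach supports, rather than replaces, the migration decision.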

    Fuzzy C-mean missing data imputation for analogy-based effort estimation

    The accuracy of effort estimation is one of the major factors in the success or failure of software projects. Analogy-Based Estimation (ABE) is a widely accepted estimation model since it follows the human approach of selecting analogies similar in nature to the target project. The prediction accuracy of the ABE model is strongly associated with the quality of the dataset, since it depends on previously completed projects for estimation. Missing Data (MD) is one of the major challenges in software engineering datasets, and several missing data imputation techniques have been investigated for the ABE model. Identifying the most similar donor values from the completed software projects for imputation is a challenging issue in the existing missing data techniques adopted for the ABE model. In this study, Fuzzy C-Mean Imputation (FCMI), Mean Imputation (MI) and K-Nearest Neighbor Imputation (KNNI) are investigated for imputing missing values in the Desharnais dataset under different missing data percentages (Desh-Miss1, Desh-Miss2) for the ABE model, and an FCMI-ABE technique is proposed. An evaluation comparing MI, KNNI and FCMI-ABE is conducted to identify the most suitable MD imputation method for the ABE model. The results suggest that FCMI-ABE, rather than MI or KNNI, imputes more reliable values into the incomplete software projects of the missing datasets. The proposed imputation method was also found to significantly improve the software development effort prediction of the ABE model
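    The fuzzy C-means imputation idea can be sketched as follows: compute the incomplete project's fuzzy membership in each cluster using only its observed features, then fill each missing feature with the membership-weighted average of the cluster centers' values. This assumes the centers were already fitted on complete projects; the two-feature data below are invented for illustration and are unrelated to the Desharnais dataset.

```python
def fcm_memberships(x_obs, centers, m=2.0):
    """Fuzzy C-means membership of a point in each cluster, computed on the
    observed features only. `x_obs` maps feature index -> value; `m` is the
    usual fuzzifier (m > 1)."""
    def dist(c):
        return sum((x_obs[j] - c[j]) ** 2 for j in x_obs) ** 0.5
    d = [max(dist(c), 1e-12) for c in centers]  # avoid division by zero
    return [1.0 / sum((d[i] / d[k]) ** (2 / (m - 1)) for k in range(len(d)))
            for i in range(len(d))]

def fcm_impute(project, centers):
    """Replace each missing feature (None) with the membership-weighted
    average of the cluster centers' values for that feature."""
    observed = {j: v for j, v in enumerate(project) if v is not None}
    u = fcm_memberships(observed, centers)
    return [v if v is not None
            else sum(ui * c[j] for ui, c in zip(u, centers))
            for j, v in enumerate(project)]

# Hypothetical 2-feature centers; project is missing its second feature
centers = [[0.0, 0.0], [10.0, 10.0]]
completed = fcm_impute([5.0, None], centers)
```

Because memberships are soft, a project halfway between two clusters receives a blended donor value rather than the value of a single nearest neighbour, which is the distinguishing property of FCMI over KNNI.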

    Non-Functional Requirement Traceability Process Model for Agile Software Development

    Agile methodologies are appreciated for fast software delivery, but they are criticized for poor handling of Non-Functional Requirements (NFRs) such as security and performance, and for the difficulty of tracing the changes caused by updates in NFRs that are also associated with Functional Requirements (FRs). This paper presents a novel approach named TANC, a traceability process model of Agile Software Development for tracing NFR change impact. To validate TANC's compatibility with most Agile process models, we present a logical model that synchronizes TANC with two enhanced models: secure feature-driven development (SFDD) and secured scrum (SScrum). We then conducted a case study on TANC using a tool support called Sagile. In terms of adaptability to an agile process model, the logical model could be depicted in SFDD, and the case study showed that TANC is carried out successfully in SFDD